Practical Low-Rank Communication Compression in Decentralized Deep Learning

Neural Information Processing Systems

Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models. However, algorithms for decentralized training with compressed communication over arbitrary connected networks have been more complicated, requiring additional memory and hyperparameters. We introduce a simple algorithm that directly compresses the model differences between neighboring workers using low-rank linear compressors. We prove that our method does not require any additional hyperparameters, converges faster than prior methods, and is asymptotically independent of both the network and the compression. Inspired by the PowerSGD algorithm for centralized deep learning, we execute power iteration steps on model differences to maximize the information transferred per bit. Out of the box, these compressors perform on par with state-of-the-art tuned compression algorithms in a series of deep learning benchmarks.
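To make the core idea concrete, here is a minimal sketch of one power-iteration step for low-rank compression of a model-difference matrix, in the spirit the abstract describes. This is an illustrative rank-r power-method step (the function name, shapes, and warm-start factor `Q` are assumptions for the example, not the authors' implementation):

```python
import numpy as np

def power_compress(M, Q):
    """One power-iteration step producing a rank-r approximation of M.

    M: the matrix to compress (e.g. a model difference between neighbors).
    Q: the previous right factor (warm start), shape (n, r).
    Returns factors P (m, r) and Q_new (n, r) with M ≈ P @ Q_new.T;
    only P and Q_new need to be communicated, not M itself.
    """
    P = M @ Q                    # left factor from one power step
    P, _ = np.linalg.qr(P)       # orthonormalize to stabilize the iteration
    Q_new = M.T @ P              # updated right factor
    return P, Q_new

rng = np.random.default_rng(0)
M = rng.standard_normal((64, 32))   # stand-in for a model difference
Q = rng.standard_normal((32, 4))    # rank-4 warm-start factor
P, Q = power_compress(M, Q)
M_hat = P @ Q.T                      # low-rank reconstruction
```

Because the factors are carried over ("warm-started") between rounds, each round performs only one cheap power-iteration step, yet the approximation tracks the dominant subspace of the evolving model differences over time, maximizing the information transferred per bit.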


Review for NeurIPS paper: Practical Low-Rank Communication Compression in Decentralized Deep Learning

Neural Information Processing Systems

Summary and Contributions: Post-rebuttal update: I am happy with the authors' response to my question on the bounded variance assumption. I maintain that the paper should be accepted. The authors take inspiration from a power-method based compression method for efficient communication in distributed optimization. Instead, they apply this idea to the 'decentralized' setting, where communication is limited to neighboring nodes on some network topology. A long-known property of the power method is its lightness in terms of hyperparameter tuning.


Review for NeurIPS paper: Practical Low-Rank Communication Compression in Decentralized Deep Learning

Neural Information Processing Systems

All reviewers agreed that this paper contains novel contributions and should be accepted. The authors are urged to read the constructive comments they received and address them to the extent possible.

